807 research outputs found

    Speech development: toddlers don't mind getting it wrong.

    A recent study has found that toddlers do not compensate for an artificial alteration in a vowel they hear themselves producing. This raises questions about how young children learn speech sounds.

    Creating the cognitive form of phonological units: The speech sound correspondence problem in infancy could be solved by mirrored vocal interactions rather than by imitation

    Theories about the cognitive nature of phonological units have been constrained by the assumption that young children solve the correspondence problem for speech sounds by imitation, whether by an auditory- or gesture-based matching-to-target process. Imitation on the part of the child implies that he makes a comparison within one of these domains, which is presumed to be the modality of the underlying representation of speech sounds. However, there is no evidence that the correspondence problem is solved in this way. Instead, we argue that the child can solve it through the mirroring behaviour of his caregivers within imitative interactions, and that this mechanism is more consistent with the developmental data. The underlying representation formed by mirroring is intrinsically perceptuo-motor. It is created by the association of a vocal action performed by the child with the reformulation of this into an L1 speech token that he hears in return. Our account of how production and perception develop, incorporating this mechanism, explains some longstanding problems in speech development and reconciles data from psychology and neuroscience.

    Modeling the development of pronunciation in infant speech acquisition.

    Pronunciation is an important part of speech acquisition, but little attention has been given to the mechanism or mechanisms by which it develops. Speech sound qualities, for example, have just been assumed to develop by simple imitation. In most accounts this is then assumed to be by acoustic matching, with the infant comparing his output to that of his caregiver. There are theoretical and empirical problems with both of these assumptions, and we present a computational model, Elija, that does not learn to pronounce speech sounds this way. Elija starts by exploring the sound-making capabilities of his vocal apparatus. Then he uses the natural responses he gets from a caregiver to learn equivalence relations between his vocal actions and his caregiver's speech. We show that Elija progresses from a babbling stage to learning the names of objects. This demonstrates the viability of a non-imitative mechanism in learning to pronounce.
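The learning loop this abstract describes can be caricatured as an association table built from caregiver responses. The action labels, reformulated tokens, and test word below are invented placeholders for illustration only; the actual Elija model explores an articulatory synthesizer, not symbolic tokens.

```python
# Minimal sketch of the non-imitative mechanism: vocal exploration,
# caregiver reformulation, and reuse of the stored associations.
# All names here are hypothetical placeholders, not from the model.

vocal_actions = ["a1", "a2", "a3"]  # motor patterns found by vocal exploration
caregiver_reformulation = {"a1": "ba", "a2": "da", "a3": "ma"}  # L1 tokens heard back

# Stage 1: babbling builds perceptuo-motor associations. Each produced
# action is stored alongside the caregiver's reformulation of it,
# linking the heard token to the vocal action that evoked it.
associations = {}
for action in vocal_actions:
    heard = caregiver_reformulation[action]  # caregiver mirrors the babble
    associations[heard] = action

# Stage 2: to pronounce a word, reuse the actions whose reformulations
# matched its syllables; no acoustic or gestural matching is needed.
def pronounce(word):
    return [associations[syllable] for syllable in word]

print(pronounce(["ba", "da"]))  # -> ['a1', 'a2']
```

The point of the sketch is that the child never compares his own output to the caregiver's: the equivalence comes entirely from the caregiver's mirroring response.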

    Two-level recognition of isolated word using neural nets

    Describes a neural-net-based isolated-word recogniser that has better performance on a standard multi-speaker database than the reference hidden Markov model recogniser. The complete neural-net recogniser is formed from two parts: a front-end, which transforms the complex acoustic specification of the speech into a simplified phonetic feature specification, and a whole-word discriminator net. Each level was trained separately, thus considerably reducing the time necessary to train the overall system.
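The two-level structure can be sketched as two cascaded stages. The dimensions and the untrained random weights below are illustrative assumptions; the paper's actual network sizes, feature set, and training procedure are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): acoustic channels per frame,
# phonetic features, vocabulary size, frames per utterance.
N_ACOUSTIC, N_FEATURES, N_WORDS, N_FRAMES = 16, 8, 4, 20

# Stage 1: front-end mapping acoustic frames to phonetic feature scores.
W_front = rng.standard_normal((N_FEATURES, N_ACOUSTIC)) * 0.1

def front_end(acoustic_frames):
    """Map each acoustic frame to a simplified phonetic feature vector."""
    return np.tanh(acoustic_frames @ W_front.T)

# Stage 2: whole-word discriminator over the flattened feature trajectory.
W_word = rng.standard_normal((N_WORDS, N_FEATURES * N_FRAMES)) * 0.1

def word_scores(feature_frames):
    """Softmax scores over the vocabulary for one utterance."""
    logits = W_word @ feature_frames.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

utterance = rng.standard_normal((N_FRAMES, N_ACOUSTIC))
probs = word_scores(front_end(utterance))
print(int(probs.argmax()))
```

Because the stages are separable, each can be trained on its own targets (phonetic features, then word labels), which is what makes the reported reduction in training time possible.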

    Characterization of Neural Tuning: Visual Lead-in Movements Generalize in Speed and Distance

    Prior work has shown that independent motor memories of opposing dynamics can be learned when the movements are preceded by unique lead-in movements, each associated with a different direction of dynamics. Here we examine generalization effects using visual lead-in movements. Specifically, we test how variations in lead-in kinematics, in terms of duration, speed and distance, affect the expression of the learned motor memory. We show that the motor system is more strongly affected by changes in the duration of the movement, whereas longer movement distances have no effect.

    The effect of contextual cues on the encoding of motor memories.

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
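A velocity-dependent curl field of the kind used in such experiments applies a force perpendicular to the hand velocity, F = Bv, where B is a rotational gain matrix whose sign sets the field direction. The gain value and sign convention below are illustrative assumptions, not parameters reported by this study.

```python
import numpy as np

def curl_field_force(velocity, b=15.0, direction=+1):
    """Force applied by a velocity-dependent curl field.

    The force is orthogonal to the hand velocity, with magnitude
    proportional to speed; flipping `direction` gives the opposing
    field. Gain b is in N*s/m (value chosen for illustration).
    """
    B = direction * b * np.array([[0.0, 1.0],
                                  [-1.0, 0.0]])
    return B @ np.asarray(velocity, dtype=float)

v = np.array([0.0, 0.3])                  # hand moving forward at 0.3 m/s
f_cw = curl_field_force(v, direction=+1)  # one field direction
f_ccw = curl_field_force(v, direction=-1) # the opposing field
print(f_cw, f_ccw)                        # equal and opposite forces
```

Because the two field directions produce exactly opposing forces for the same movement, learning them interferes unless a contextual cue lets the motor system form separate memories, which is what the study quantifies.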

    Composition and decomposition in bimanual dynamic learning.

    Our ability to skillfully manipulate an object often involves the motor system learning to compensate for the dynamics of the object. When the two arms learn to manipulate a single object they can act cooperatively, whereas when they manipulate separate objects they control each object independently. We examined how learning transfers between these two bimanual contexts by applying force fields to the arms. In a coupled context, a single dynamic is shared between the arms, and in an uncoupled context separate dynamics are experienced independently by each arm. In a composition experiment, we found that when subjects had learned uncoupled force fields they were able to transfer to a coupled field that was the sum of the two fields. However, the contribution of each arm repartitioned over time so that, when they returned to the uncoupled fields, the error initially increased but rapidly reverted to the previous level. In a decomposition experiment, after subjects learned a coupled field, their error increased when exposed to uncoupled fields that were orthogonal components of the coupled field. However, when the coupled field was reintroduced, subjects rapidly readapted. These results suggest that the representations of dynamics for uncoupled and coupled contexts are partially independent. We found additional support for this hypothesis by showing significant learning of opposing curl fields when the context, coupled versus uncoupled, was alternated with the curl field direction. These results suggest that the motor system is able to use partially separate representations for dynamics of the two arms acting on a single object and two arms acting on separate objects.

    Context-dependent partitioning of motor learning in bimanual movements.

    Human subjects easily adapt to single dynamic or visuomotor perturbations. In contrast, when two opposing dynamic or visuomotor perturbations are presented sequentially, interference is often observed. We examined the effect of bimanual movement context on interference between opposing perturbations using pairs of contexts, in which the relative direction of movement between the two arms was different across the pair. When each perturbation direction was associated with a different bimanual context, such as movement of the arms in the same direction versus movement in the opposite direction, interference was dramatically reduced. This occurred over a short period of training and was seen for both dynamic and visuomotor perturbations, suggesting a partitioning of motor learning for the different bimanual contexts. Further support for this was found in a series of transfer experiments. Having learned a single dynamic or visuomotor perturbation in one bimanual context, subjects showed incomplete transfer of this learning when the context changed, even though the perturbation remained the same. In addition, we examined a bimanual context in which one arm was moved passively and show that the reduction in interference requires active movement. The sensory consequences of movement are thus insufficient to allow opposing perturbations to be co-represented. Our results suggest that different bimanual movement contexts engage at least partially separate representations of dynamics and kinematics in the motor system.

    A Modular 3D-Printed Inverted Pendulum

    Design and prototyping of a low-cost light weight fixed-endpoint orientation planar Cobot

    Here we present the design and construction of a low-cost planar robotic arm that makes use of lightweight components and a passive link mechanism to maintain a fixed endpoint orientation. The arm structure itself is low-cost and built from carbon fiber tubes, which yield a high stiffness-to-weight ratio. To facilitate construction, commercially available pulley and bearing components are used in the design where possible, and all custom mechanical parts are 3D printed. To reduce power consumption, the arm makes use of non-back-drivable worm-gear motor actuation, so static arm configurations can be maintained without requiring motor power. We first analyze and simulate the kinematics and the static torque/force relationships of the mechanism. A microcontroller system was then developed to read the sensors and drive the arm motors. Finally, we demonstrate arm operation with simple movement tasks.
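For a planar two-link arm, the kinematic and static torque/force analysis mentioned above reduces to the forward-kinematics map and the relation tau = J^T f. The link lengths and load below are illustrative assumptions, and the passive fixed-orientation link mechanism is not modeled here.

```python
import numpy as np

L1, L2 = 0.30, 0.25  # link lengths in metres (illustrative, not from the paper)

def forward_kinematics(q1, q2):
    """Endpoint position of a planar two-link arm (shoulder and elbow angles)."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian(q1, q2):
    """Maps joint velocities to endpoint velocities."""
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

def static_torques(q1, q2, endpoint_force):
    """Joint torques that hold a static endpoint force: tau = J^T f.

    With non-back-drivable worm gears, these holding torques require
    no motor power once the arm is in position."""
    return jacobian(q1, q2).T @ np.asarray(endpoint_force, dtype=float)

q1, q2 = np.pi / 4, np.pi / 3
pos = forward_kinematics(q1, q2)
tau = static_torques(q1, q2, [0.0, -5.0])  # holding a 5 N downward load
print(pos, tau)
```

The Jacobian-transpose relation is what the static analysis simulates: it tells you the worst-case joint torques over the workspace, and hence the required worm-gear motor sizing.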